28 research outputs found

    Subword-based Stochastic Segment Modeling for Offline Arabic Handwriting Recognition

    In this paper, we describe several experiments in which we use a stochastic segment model (SSM) to improve offline handwriting recognition (OHR) performance. We use the SSM to re-rank (re-score) multiple decoder hypotheses. A probabilistic multi-class SVM is trained to model stochastic segments obtained by force-aligning transcriptions with the underlying image. To train the SVM, we extract multiple features from the stochastic segments that are sensitive to a larger context span. Our experiments show that using confidence scores from the trained SVM within the SSM framework can significantly improve OHR performance. We also show that OHR performance can be improved further by combining character-based and parts-of-Arabic-words (PAW)-based SSMs.
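
    The re-scoring idea can be illustrated with a short sketch: each N-best hypothesis carries a decoder score plus SVM posteriors for its force-aligned segments, and the two are interpolated before picking the best hypothesis. The Hypothesis fields, interpolation weight, and function names below are illustrative assumptions, not the paper's actual formulation.

        # Sketch only: N-best re-ranking with segment-level SVM confidences (assumed structure).
        from dataclasses import dataclass
        from typing import List
        import math

        @dataclass
        class Hypothesis:
            text: str
            decoder_logprob: float           # log-score from the baseline decoder
            segment_posteriors: List[float]  # SVM class posteriors for force-aligned segments

        def ssm_score(hyp: Hypothesis, ssm_weight: float = 0.5) -> float:
            """Interpolate the decoder score with summed log SVM confidences."""
            svm_logprob = sum(math.log(max(p, 1e-12)) for p in hyp.segment_posteriors)
            return (1.0 - ssm_weight) * hyp.decoder_logprob + ssm_weight * svm_logprob

        def rerank(nbest: List[Hypothesis]) -> Hypothesis:
            """Return the hypothesis with the best combined score from an N-best list."""
            return max(nbest, key=ssm_score)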

    Facial skin motion properties from video: Modeling and applications

    Deformable modeling of facial soft tissues has found use in application domains such as human-machine interaction for facial expression recognition. More recently, such modeling techniques have been used for tasks like age estimation and person identification. This dissertation focuses on the development of novel image analysis algorithms that follow facial strain patterns observed in video recordings of faces during expressions. Specifically, we use the strain pattern extracted from non-rigid facial motion as a simplified and adequate way to characterize the underlying material properties of facial soft tissues. Such an approach has several unique features. The strain pattern, rather than image intensity, is used as the classification feature. Strain is related to biomechanical properties of facial tissues that are distinct for each individual. The strain pattern is less sensitive to illumination differences (between enrolled and query sequences) and face camouflage, because the strain pattern of a face remains stable as long as reliable facial deformations are captured. A finite element modeling based method enforces regularization, which mitigates issues (such as temporal matching and noise sensitivity) related to automatic motion estimation; the computational strategy is therefore accurate and robust. Images or videos of facial deformations are acquired with a video camera, without special imaging equipment. Experiments using range images on a dataset of 50 subjects provide the necessary proof of concept that strain maps indeed have discriminative value. On a video dataset of 60 subjects undergoing a particular facial expression, experimental results using the computational strategy presented in this work emphasize the discriminatory power and stability of strain maps under adverse data conditions (shadow lighting and face camouflage). Such properties make the strain map a promising feature for image analysis tasks that can benefit from auxiliary information about the human face. Strain maps add a new dimension to our ability to characterize a human face. They also foster new ways to capture facial dynamics from video which, if exploited efficiently, can lead to improved performance in tasks involving the human face. In a subsequent effort, we model the material constants (Young's modulus) of the skin in sub-regions of the face from the motion observed in multiple facial expressions. On a public database of 40 subjects undergoing a set of facial motions, we present an expression-invariant strategy for matching faces using the Young's modulus of the skin. Such an efficient way of describing underlying material properties from the displacements observed in video has an important application in the deformable modeling of physical objects, which is usually gauged by simplicity and adequacy. The contributions of this work will have an impact on the broader vision community because of its novel approaches to the long-standing problem of motion analysis of elastic objects. Additional value lies in its cross-disciplinary nature and its focus on applying image analysis algorithms to the difficult and important problem of characterizing the material properties of facial soft tissues and their applications. We believe this research provides a special opportunity to use video processing to make unique discoveries through the facial dynamics inherent in video.
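
    As a rough illustration of the core quantity in this work, the sketch below turns a dense facial displacement field into a per-pixel strain map using the small-deformation strain tensor; the FEM regularization and smoothing described in the dissertation are omitted, and the function name and scalar summary are assumptions.

        # Sketch only: per-pixel strain magnitude from a dense displacement field.
        import numpy as np

        def strain_map(u: np.ndarray, v: np.ndarray) -> np.ndarray:
            """u, v: per-pixel x/y displacements (H x W). Returns a strain-magnitude map."""
            du_dy, du_dx = np.gradient(u)
            dv_dy, dv_dx = np.gradient(v)
            e_xx, e_yy = du_dx, dv_dy
            e_xy = 0.5 * (du_dy + dv_dx)
            # Frobenius norm of the 2-D strain tensor as a scalar summary per pixel
            return np.sqrt(e_xx**2 + e_yy**2 + 2.0 * e_xy**2)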

    Video-Based Person Identification Using Facial Strain Maps as a Biometric

    Research on video-based face recognition has received increased attention in the past few years. Algorithms developed for video have the advantage of a plenitude of frames from which to extract information. Despite this, most research in this direction has limited the scope of the problem to applying still-image-based approaches to selected frames on which 2D algorithms are expected to perform well. Such an approach uses only the spatial information contained in video and does not incorporate the temporal structure. Only recently has the intelligence community begun to approach the problem in this direction. Video-based face recognition algorithms in the last couple of years attempt to use spatial and temporal information simultaneously for the recognition of moving faces. A new face recognition method that falls into the category of algorithms adopting a spatio-temporal representation and utilizing dynamic information extracted from video is presented. The method was designed based on the hypothesis that the strain pattern exhibited during facial expression provides a unique fingerprint for recognition. First, a dense motion field is obtained with an optical flow algorithm. A strain pattern is then derived from the motion field. In experiments with 30 subjects, results indicate that the strain pattern is a useful biometric, especially when dealing with extreme conditions such as shadow lighting and face camouflage, for which conventional face recognition methods are expected to fail. The ability to characterize the face using the elastic properties of facial skin opens up new avenues for the face recognition community in the context of modeling a face using features beyond visible cues.
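
    A minimal sketch of the pipeline described above: dense optical flow between two expression frames, a strain map derived from the flow, and nearest-neighbour matching against an enrolled gallery. Farneback flow and normalized cross-correlation stand in for whichever flow algorithm and matcher the paper actually used; all names here are illustrative.

        # Sketch only: strain map from optical flow, then correlation-based identification.
        import cv2
        import numpy as np

        def flow_strain(frame0_gray: np.ndarray, frame1_gray: np.ndarray) -> np.ndarray:
            """Dense Farneback flow between two grayscale frames, reduced to a strain map."""
            flow = cv2.calcOpticalFlowFarneback(frame0_gray, frame1_gray, None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)
            u, v = flow[..., 0], flow[..., 1]
            du_dy, du_dx = np.gradient(u)
            dv_dy, dv_dx = np.gradient(v)
            return np.sqrt(du_dx**2 + dv_dy**2 + 0.5 * (du_dy + dv_dx)**2)

        def identify(query: np.ndarray, gallery: dict) -> str:
            """Return the gallery identity whose strain map correlates best with the query."""
            def ncc(a, b):
                a = (a - a.mean()) / (a.std() + 1e-12)
                b = (b - b.mean()) / (b.std() + 1e-12)
                return float((a * b).mean())
            return max(gallery, key=lambda subject_id: ncc(query, gallery[subject_id]))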

    Heavy quark physics


    Facial Strain Pattern as a Soft Forensic Evidence

    The success of forensic identification largely depends on the availability of strong evidence or traces that substantiate the prosecution hypothesis that a certain person is guilty of a crime. In light of this, extracting subtle evidence that criminals leave behind at the crime scene is of valuable help to investigators. We propose a novel method that uses the strain pattern extracted from changing facial expressions in video as auxiliary evidence for person identification. The strength of the strain evidence is analyzed based on the increase in likelihood ratio it provides in a suspect population. Results show that the strain pattern can be used as supplementary biometric evidence in adverse operational conditions, such as shadow lighting and face camouflage, where pure intensity-based face recognition algorithms will fail.
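
    The likelihood-ratio analysis mentioned above can be sketched as follows: the observed strain-match score is evaluated under genuine (same-person) and impostor (different-person) score distributions estimated from a reference population. The Gaussian fits are an assumed simplification of however the paper actually models those distributions.

        # Sketch only: likelihood ratio of a match score under genuine vs. impostor models.
        import numpy as np
        from scipy.stats import norm

        def likelihood_ratio(score: float,
                             genuine_scores: np.ndarray,
                             impostor_scores: np.ndarray) -> float:
            p_given_same = norm.pdf(score, genuine_scores.mean(), genuine_scores.std())
            p_given_diff = norm.pdf(score, impostor_scores.mean(), impostor_scores.std())
            # LR > 1 supports the hypothesis that the suspect produced the trace;
            # larger values mean stronger supplementary evidence.
            return p_given_same / max(p_given_diff, 1e-12)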
